terraform/aws: remove option to use an existing vpc in aws #654
Conversation
Needs a rebase?

I strongly believe we need to keep both the CIDR and the existing VPC support. In fact, we may want to enhance it a bit to use all or a subset of AZs. Can we instead fix/protect the destroy logic?

/retest

We are going to punt on this functionality until after 4.0.
@staebler: The following test failed, say /retest to rerun it.

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Needs a rebase :).

@wking, can you take another look? Otherwise this is /approve.
data/data/aws/bootstrap/main.tf
Outdated
@@ -109,7 +109,7 @@ resource "aws_instance" "bootstrap" {
   subnet_id = "${var.subnet_id}"
   user_data = "${data.ignition_config.redirect.rendered}"
   vpc_security_group_ids = ["${var.vpc_security_group_ids}"]
-  associate_public_ip_address = "${var.associate_public_ip_address}"
+  associate_public_ip_address = "true"
I think we want to drop the quotes, see here.
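For illustration only, a minimal sketch of the unquoted form this comment suggests; the ami and instance_type arguments are assumed placeholders, not the PR's values:

```hcl
# Hedged sketch, not the PR's exact code: the boolean is passed as a bare
# literal rather than the quoted string "true".
resource "aws_instance" "bootstrap" {
  ami           = "${var.ami}"           # assumed placeholder
  instance_type = "${var.instance_type}" # assumed placeholder

  subnet_id              = "${var.subnet_id}"
  user_data              = "${data.ignition_config.redirect.rendered}"
  vpc_security_group_ids = ["${var.vpc_security_group_ids}"]

  # Unquoted boolean instead of "true"; Terraform accepts both, but the
  # bare literal is the idiomatic form.
  associate_public_ip_address = true
}
```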
@wking I thought that I had fixed that already. I'll get that done.
Oh, that's a different one. I fixed the one in master.
data/data/aws/main.tf
Outdated
@@ -1,7 +1,7 @@
 locals {
   private_endpoints = "${var.aws_endpoints == "public" ? false : true}"
   public_endpoints = "${var.aws_endpoints == "private" ? false : true}"
-  private_zone_id = "${var.aws_external_private_zone != "" ? var.aws_external_private_zone : join("", aws_route53_zone.int.*.zone_id)}"
+  private_zone_id = "${join("", aws_route53_zone.int.*.zone_id)}"
Do we know what this `join` is about? I'd expect `"${aws_route53_zone.int.zone_id}"`.
@wking The `join` is used to convert the list to a single value, knowing that the list will only have a single element. With the changes in this PR, the `join` is no longer necessary: `aws_route53_zone` is no longer a collection of resources; it is a single resource.
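To illustrate the point, a hedged sketch of the pattern being discussed (variable names such as base_domain are assumed, and this is not the installer's exact module code): when the resource uses `count`, its attributes are lists, so `join("")` over the splat collapses the zero-or-one-element list into a single string; once `count` is gone, a direct reference suffices.

```hcl
# Hedged sketch; variable names are assumed for illustration.
variable "aws_external_private_zone" {
  default = ""
}

variable "base_domain" {}

resource "aws_route53_zone" "int" {
  # With count, aws_route53_zone.int is a list holding zero or one resources.
  count = "${var.aws_external_private_zone == "" ? 1 : 0}"
  name  = "${var.base_domain}"
}

locals {
  # join("") over the splat collapses the 0-or-1-element list into either ""
  # or the single zone ID.
  private_zone_id = "${join("", aws_route53_zone.int.*.zone_id)}"

  # Once the external-zone option (and thus the count) is removed, a direct
  # reference is all that is needed:
  #   private_zone_id = "${aws_route53_zone.int.zone_id}"
}
```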
And to be clear, the code in the PR is what you are expecting: `"${aws_route53_zone.int.zone_id}"`. Your comment is using an outdated version.
Ah, I had been going through commit by commit. Maybe you can squash the `join` removal further back. Or just leave it, since it gets fixed by the end of the PR.
I try to make the end result of each commit a working product. After that commit, the `join` is still needed, since at that point `aws_route53_zone` is still a collection of resources.
For the limited scope of the installer, we do not want the user to have the ability to share the VPC between clusters. A shared VPC could potentially be deleted when destroying one of the clusters, leaving the rest of the clusters using the shared VPC in an unusable state.
Fixes https://jira.coreos.com/browse/CORS-873
With the removal of the option to use an external VPC, the following variables are no longer used and are removed: aws_external_master_subnet_ids, aws_external_private_zone, aws_external_worker_subnet_ids.
https://jira.coreos.com/browse/CORS-873

The following unused variables are removed as well: aws_endpoints, aws_installer_role, aws_master_custom_subnets, aws_master_extra_sg_ids, aws_worker_custom_subnets.
https://jira.coreos.com/browse/CORS-873
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: abhinavdahiya, crawford, staebler, wking. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Catching up with openshift/installer@6f55e673 (terraform/aws: remove option to use an existing vpc in aws, 2018-11-11, openshift/installer#654).
Centralize extra-tag inclusion on aws/main.tf. This reduces the number of places we need to think about what tags should be ;). Also keep kubernetes.io/cluster/{name} localized in the aws module.

See 8a37f72 (modules/aws/bootstrap: Pull AWS bootstrap setup into a module, 2018-09-05, openshift#217) for why we need to keep it on the bootstrap instance. But the bootstrap resources will be removed after the bootstrap-complete event comes through, and we don't want Kubernetes controllers trying to pick them up.

This commit updates the internal Route 53 zone from KubernetesCluster to kubernetes.io/cluster/{name}: owned, catching it up to kubernetes/kubernetes@0b5ae539 (AWS: Support shared tag, 2017-02-18, kubernetes/kubernetes#41695). That tag originally landed on the zone back in 75fb49a (platforms/aws: apply tags to internal route53 zone, 2017-05-02, coreos/tectonic-installer#465).

Only the master instances need the clusterid tag, as described in 6c7a5f0 (Tag master machines for adoption by machine controller, 2018-10-17, openshift#479).

A number of VPC resources have moved from "shared" to "owned". The shared values are from 45dfc2b (modules/aws,azure: use the new tag format for k8s 1.6, 2017-05-04, coreos/tectonic-installer#469). The commit message doesn't have much to say for motivation, but Brad Ison said [1]:

    I'm not really sure if anything in Kubernetes actually uses the owned vs. shared values at the moment, but in any case, it might make more sense to mark subnets as shared. That was actually one of the main use cases for moving to this style of tagging -- being able to share subnets between clusters.

But we aren't sharing these resources; see 6f55e67 (terraform/aws: remove option to use an existing vpc in aws, 2018-11-11, openshift#654).

[1]: coreos/tectonic-installer#469 (comment)
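As a hedged illustration of the tag scheme this commit message describes (the variable names cluster_name and extra_tags and the aws_vpc resource are assumed here, not taken from the installer): each cluster-owned resource carries a kubernetes.io/cluster/{name} tag with the value owned, merged with any user-supplied extra tags, while shared would mark a resource used by multiple clusters.

```hcl
# Illustrative sketch only; variable and resource names are assumed.
variable "cluster_name" {}

variable "extra_tags" {
  type    = "map"
  default = {}
}

resource "aws_vpc" "new_vpc" {
  cidr_block = "10.0.0.0/16"

  # "owned" marks the VPC as belonging to exactly this cluster; "shared"
  # would be the value for a resource shared between clusters.
  tags = "${merge(map("kubernetes.io/cluster/${var.cluster_name}", "owned"), var.extra_tags)}"
}
```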